Investment in data centers worldwide hit record $61bn in 2025, report finds

The Guardian

A protest against a planned data center in Decatur, Georgia. Analysts see a 'global construction frenzy that shows no signs of slowing' amid a surge in demand from the AI boom. A new report finds that investment in the worldwide data center market reached $61bn this year, setting a new record atop the wave of the artificial intelligence boom. The analysis by S&P Global, first reported by CNBC, documented what the market intelligence firm called a "global construction frenzy that shows no signs of slowing" to build out the massive real estate, hardware, and energy requirements driven by insatiable demand from AI companies. S&P pegged 2024's investment in the data center market at $60.8bn, just below the 2025 figure.


From Monocular Vision to Autonomous Action: Guiding Tumor Resection via 3D Reconstruction

Acar, Ayberk, Smith, Mariana, Al-Zogbi, Lidia, Watts, Tanner, Li, Fangjie, Li, Hao, Yilmaz, Nural, Scheikl, Paul Maria, d'Almeida, Jesse F., Sharma, Susheela, Branscombe, Lauren, Ertop, Tayfun Efe, Webster, Robert J. III, Oguz, Ipek, Kuntz, Alan, Krieger, Axel, Wu, Jie Ying

arXiv.org Artificial Intelligence

Surgical automation requires precise guidance and understanding of the scene. Current methods in the literature rely on bulky depth cameras to create maps of the anatomy; however, this does not translate well to space-limited clinical applications. Monocular cameras are small and enable minimally invasive surgery in tight spaces, but additional processing is required to generate 3D scene understanding. We propose a 3D mapping pipeline that uses only RGB images to create segmented point clouds of the target anatomy. To ensure the most precise reconstruction, we compare the performance of different structure-from-motion algorithms on mapping central airway obstructions, and test the pipeline on a downstream task of tumor resection. In several metrics, including post-procedure tissue model evaluation, our pipeline performs comparably to RGB-D cameras and, in some cases, even surpasses their performance. These promising results demonstrate that automation guidance can be achieved in minimally invasive procedures with monocular cameras. This study is a step toward the complete autonomy of surgical robots.


CARE-SD: Classifier-based analysis for recognizing and eliminating stigmatizing and doubt marker labels in electronic health records: model development and validation

Walker, Drew, Thorne, Annie, Das, Sudeshna, Love, Jennifer, Cooper, Hannah LF, Livingston, Melvin III, Sarker, Abeed

arXiv.org Artificial Intelligence

Objective: To detect and classify features of stigmatizing and biased language in intensive care electronic health records (EHRs) using natural language processing techniques. Materials and Methods: We first created a lexicon and regular expression lists from literature-driven stem words for linguistic features of stigmatizing patient labels, doubt markers, and scare quotes within EHRs. The lexicon was further extended using Word2Vec and GPT 3.5, and refined through human evaluation. These lexicons were used to search for matches across 18 million sentences from the de-identified Medical Information Mart for Intensive Care-III (MIMIC-III) dataset. For each linguistic bias feature, 1000 sentence matches were sampled, labeled by expert clinical and public health annotators, and used to train supervised learning classifiers. Results: Lexicon development from expanded literature stem-word lists resulted in a doubt marker lexicon containing 58 expressions, and a stigmatizing labels lexicon containing 127 expressions. Classifiers for doubt markers and stigmatizing labels had the highest performance, with macro F1-scores of .84 and .79, positive-label recall and precision values ranging from .71 to .86, and accuracies aligning closely with human annotator agreement (.87). Discussion: This study demonstrated the feasibility of supervised classifiers in automatically identifying stigmatizing labels and doubt markers in medical text, and identified trends in stigmatizing language use in an EHR setting. Additional labeled data may help improve the lower-performing scare quote model. Conclusions: Classifiers developed in this study showed high model performance and can be applied to identify patterns and target interventions to reduce stigmatizing labels and doubt markers in healthcare systems.
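To illustrate the lexicon-and-regex matching step the abstract describes, here is a minimal sketch in Python. The stem terms below are hypothetical placeholders, not the paper's actual 58-expression doubt-marker or 127-expression stigmatizing-label lexicons:

```python
import re

# Hypothetical stem lexicons for illustration only; the study's
# lexicons were literature-driven and expanded via Word2Vec / GPT 3.5.
DOUBT_MARKERS = ["claims", "insists", "supposedly", "adamant"]
STIGMA_LABELS = ["drug seeking", "noncompliant", "frequent flyer"]

def build_pattern(terms):
    """Compile a case-insensitive, word-boundary regex matching any term."""
    escaped = (re.escape(t) for t in terms)
    return re.compile(r"\b(?:" + "|".join(escaped) + r")\b", re.IGNORECASE)

def find_matches(sentences, pattern):
    """Return (sentence, matched term) pairs as candidates for annotation."""
    return [(s, m.group(0)) for s in sentences if (m := pattern.search(s))]

notes = [
    "Patient claims he took his medication as directed.",
    "Patient is noncompliant with insulin regimen.",
    "Vitals stable overnight.",
]
doubt_hits = find_matches(notes, build_pattern(DOUBT_MARKERS))
# doubt_hits -> [("Patient claims he took his medication as directed.", "claims")]
```

In the study, sampled matches like these were labeled by expert annotators and then used to train supervised classifiers, since a lexicon hit alone does not establish biased usage.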



Discovering novel systemic biomarkers in photos of the external eye

Babenko, Boris, Traynis, Ilana, Chen, Christina, Singh, Preeti, Uddin, Akib, Cuadros, Jorge, Daskivich, Lauren P., Maa, April Y., Kim, Ramasamy, Kang, Eugene Yu-Chuan, Matias, Yossi, Corrado, Greg S., Peng, Lily, Webster, Dale R., Semturs, Christopher, Krause, Jonathan, Varadarajan, Avinash V., Hammel, Naama, Liu, Yun

arXiv.org Artificial Intelligence

External eye photos were recently shown to reveal signs of diabetic retinal disease and elevated HbA1c. In this paper, we evaluate if external eye photos contain information about additional systemic medical conditions. We developed a deep learning system (DLS) that takes external eye photos as input and predicts multiple systemic parameters, such as those related to the liver (albumin, AST); kidney (eGFR estimated using the race-free 2021 CKD-EPI creatinine equation, urine ACR); bone & mineral (calcium); thyroid (TSH); and blood count (Hgb, WBC, platelets). Development leveraged 151,237 images from 49,015 patients with diabetes undergoing diabetic eye screening in 11 sites across Los Angeles County, CA. Evaluation focused on 9 pre-specified systemic parameters and leveraged 3 validation sets (A, B, C) spanning 28,869 patients with and without diabetes undergoing eye screening in 3 independent sites in Los Angeles County, CA, and the greater Atlanta area, GA. We compared against baseline models incorporating available clinicodemographic variables (e.g. age, sex, race/ethnicity, years with diabetes). Relative to the baseline, the DLS achieved statistically significant superior performance at detecting AST>36, calcium<8.6, eGFR<60, Hgb<11, platelets<150, ACR>=300, and WBC<4 on validation set A (a patient population similar to the development sets), where the AUC of the DLS exceeded that of the baseline by 5.2-19.4%. On validation sets B and C, with substantial patient population differences compared to the development sets, the DLS outperformed the baseline for ACR>=300 and Hgb<11 by 7.3-13.2%. Our findings provide further evidence that external eye photos contain important biomarkers of systemic health spanning multiple organ systems. Further work is needed to investigate whether and how these biomarkers can be translated into clinical impact.
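The comparison above is stated in terms of AUC. As a reminder of what that metric measures, this minimal sketch (not the authors' code) computes AUC via the Mann-Whitney formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counting half:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: P(random positive
    outranks random negative), ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores: 3 of 4 positive/negative pairs are ranked
# correctly, so AUC = 0.75.
print(auc([0.9, 0.8], [0.4, 0.85]))  # 0.75
```

A model that ranks cases no better than chance scores 0.5, which is why an AUC gap of 5.2-19.4% over the clinicodemographic baseline is a meaningful improvement.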


100 Women of Color Remember Their First Encounter With Racism--And How They Overcame It

#artificialintelligence

Sticks and stones may break my bones, but words will never hurt me. This was a mantra I picked up on the playground at elementary school--something I repeated over and over again anytime I came face to face with racism. It was a coping mechanism meant to guard my heart from the cacophony of discriminatory comments that shaped me as a young Korean American girl growing up in predominantly white spaces. But now that I'm well into adulthood, I think about the girls of color who are also being taught to pretend that words don't hurt--and the people this way of thinking actually protects. It's hard to escape the unrelenting consequences of racism: In the past year alone, we lost Breonna Taylor, George Floyd, Ahmaud Arbery, and the six women of Asian descent murdered in Atlanta (Xiaojie "Emily" Tan, Daoyou Feng, Suncha Kim, Yong Ae Yue, Soon Chung Park, Hyun Jung Grant) at the hands of this insidious disease--and those are just the names that were in the headlines. If we don't acknowledge ...


ForecastQA: A Question Answering Challenge for Event Forecasting

Jin, Woojeong, Kim, Suji, Khanna, Rahul, Lee, Dong-Ho, Morstatter, Fred, Galstyan, Aram, Ren, Xiang

arXiv.org Machine Learning

Event forecasting is a challenging, yet consequential task, as humans seek to constantly plan for the future. Existing automated forecasting approaches rely mostly on structured data, such as time-series or event-based knowledge graphs, to help predict future events. In this work, we formulate the forecasting problem as a restricted-domain, multiple-choice, question-answering (QA) task that simulates the forecasting scenario. To showcase the usefulness of this task formulation, we introduce ForecastQA, a question-answering dataset of 10,392 event forecasting questions collected and verified via crowdsourcing. We also present our experiments on ForecastQA using BERT-based models and find that our best model achieves 61.0% accuracy on the dataset, still about 18 points behind human performance. We hope ForecastQA will support future research efforts in bridging this gap.
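To make the multiple-choice accuracy metric concrete, here is a minimal sketch of how such a task is scored; the question text, answer choices, and model scores below are hypothetical, not drawn from the dataset:

```python
def predict(scores):
    """Pick the index of the highest-scoring answer choice (argmax)."""
    return max(range(len(scores)), key=scores.__getitem__)

def accuracy(predictions, gold):
    """Fraction of questions where the predicted choice matches the gold label."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Hypothetical item in the ForecastQA style: a time-constrained question
# with candidate choices; a model assigns each choice a plausibility score.
item = {
    "question": "Will the trade agreement be signed by March 2020?",
    "choices": ["yes", "no", "negotiations postponed"],
}
model_scores = [0.2, 0.7, 0.1]   # hypothetical model output per choice
pred = predict(model_scores)     # index 1 ("no")
print(accuracy([pred], [1]))     # 1.0
```

The paper's 61.0% figure is this accuracy computed over all 10,392 questions, with BERT-based models supplying the per-choice scores.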


Video Friday: Aibo Reborn, Robot Plus HoloLens, and NREC's Formula

IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next two months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. We already posted about the unveiling of Sony's new Aibo, but here's a bit of extra video from the event showing the little robotic dog in live action: In this video we show a compilation of our research for the last 4 years on autonomous navigation of bipedal robots. It is part of the DFG-funded project "Versatile and Robust Walking in Uneven Terrain" (German Research Foundation) and includes development in environment perception and modeling, motion planning and stability control.


Making Project Team Recommendations from Online Information Sources

Earl, Charles C. (Virkaz Technologies) | Johnson, Amos (Morehouse College) | Yelpaala, Kaakpema (Yelpaala Good Advisors) | Good, Travis (Yelpaala Good Advisors)

AAAI Conferences

We are developing an Internet platform called MediaTeam that provides a marketplace connecting media content consumers to communities of media content creators. The platform is enabled by our method for automated assembly of virtual project teams. Media creators use the automated team assembler to quickly identify and team with collaborators. The recommendation process factors in how team members' skills, work styles, and communication styles complement one another. We are now testing the teaming and collaboration platforms with video creators and aim to launch by the summer.